Hermes Agent Proactive Learning and Skill Generation — A Source-Level Analysis Report
Analysis date: 2026-04-12
Hermes version: v0.8.0 (hermes_agent-0.8.0)
Source path: /root/.hermes/hermes-agent/
Analyst: 宁姚 subagent
1. Hermes Source Directory Layout
/root/.hermes/hermes-agent/
├── agent/ # Agent core modules
│ ├── memory_manager.py # Memory manager (orchestration layer)
│ ├── memory_provider.py # Abstract base class for memory providers
│ ├── prompt_builder.py # System prompt construction
│ ├── context_compressor.py # Context compression
│ ├── skill_utils.py # Skill utility functions
│ ├── skill_commands.py # Skill commands
│ └── insights.py # Session insights engine
├── tools/ # Tool implementation layer
│ ├── memory_tool.py # Memory tool (MEMORY.md/USER.md)
│ ├── skill_manager_tool.py # Skill management tool
│ ├── skills_hub.py # Skills hub (source adapters)
│ ├── skills_guard.py # Skill security scanner
│ ├── skills_tool.py # Skill list/view tool
│ └── registry.py # Tool registry
├── plugins/memory/ # External memory plugins
│ ├── honcho/ # Honcho AI-native memory
│ ├── holographic/ # Holographic memory
│ ├── mem0/ # Mem0 memory
│ └── ...
├── run_agent.py # Agent main run loop (518 KB core file)
├── skills/ # Skills directory (~/.hermes/skills/)
└── cli.py # CLI entry point
2. Proactive Learning Mechanism: Source Analysis
2.1 Memory Storage Layer — tools/memory_tool.py
Core class: MemoryStore
class MemoryStore:
    """
    Bounded curated memory with file persistence. One instance per AIAgent.
    Maintains two parallel states:
    - _system_prompt_snapshot: frozen at load time, used for system prompt injection.
      Never mutated mid-session. Keeps prefix cache stable.
    - memory_entries / user_entries: live state, mutated by tool calls, persisted to disk.
      Tool responses always reflect this live state.
    """
    def __init__(self, memory_char_limit: int = 2200, user_char_limit: int = 1375):
        self.memory_entries: List[str] = []
        self.user_entries: List[str] = []
        self.memory_char_limit = memory_char_limit  # MEMORY.md character cap
        self.user_char_limit = user_char_limit      # USER.md character cap
        # Frozen snapshot for system prompt — set once at load_from_disk()
        self._system_prompt_snapshot: Dict[str, str] = {"memory": "", "user": ""}
Key design points:
- Dual parallel states: _system_prompt_snapshot (a frozen snapshot used for system prompt injection) alongside memory_entries/user_entries (live state mutated by tool calls)
- Frozen snapshot: captured at load_from_disk() and never changed during the session, which keeps the system prompt's prefix cache stable
- File lock + atomic writes: fcntl.flock + tempfile.mkstemp + os.replace guarantee concurrency safety
- Character limits (not tokens): MEMORY.md defaults to 2,200 characters (~800 tokens), USER.md to 1,375 characters (~500 tokens)
- Entry separator: § (section sign); entries may span multiple lines
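The flock-plus-atomic-replace pattern named above can be sketched in a few lines. This is a minimal standalone sketch, not Hermes' actual implementation; the helper names are illustrative:

```python
import fcntl
import os
import tempfile

def atomic_write_text(path: str, text: str) -> None:
    """Write text atomically: write to a temp file in the same
    directory, fsync, then os.replace() it over the target so readers
    never observe a half-written file."""
    dirname = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname, prefix=".memtmp-")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)  # atomic on POSIX filesystems
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise

def locked_update(path: str, new_text: str) -> None:
    """Serialize concurrent writers with an advisory flock on a
    sidecar lock file, then perform the atomic write."""
    with open(path + ".lock", "w") as lock_f:
        fcntl.flock(lock_f, fcntl.LOCK_EX)
        try:
            atomic_write_text(path, new_text)
        finally:
            fcntl.flock(lock_f, fcntl.LOCK_UN)
```

The sidecar lock serializes writers across processes, while os.replace() guarantees readers see either the old file or the new one, never a partial write.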
Injection mechanism — system prompt construction:
# run_agent.py, lines 2993-3005
if self._memory_store:
    if self._memory_enabled:
        mem_block = self._memory_store.format_for_system_prompt("memory")
        if mem_block:
            prompt_parts.append(mem_block)
    if self._user_profile_enabled:
        user_block = self._memory_store.format_for_system_prompt("user")
        if user_block:
            prompt_parts.append(user_block)
Frozen snapshot rendering:
def format_for_system_prompt(self, target: str) -> Optional[str]:
    """
    Return the frozen snapshot for system prompt injection.
    This returns the state captured at load_from_disk() time, NOT the live state.
    Mid-session writes do not affect this. This keeps the system prompt stable
    across all turns, preserving the prefix cache.
    """
    block = self._system_prompt_snapshot.get(target, "")
    return block if block else None
Rendered format:
════════════════════════════════════════════════
MEMORY (your personal notes) [45% — 990/2,200 chars]
════════════════════════════════════════════════
[entry 1 content]
§
[entry 2 content]
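The banner with the usage percentage is easy to reproduce. A sketch, assuming usage is measured on the §-joined body; render_memory_block is a hypothetical name, not a Hermes function:

```python
def render_memory_block(entries, char_limit, title="MEMORY (your personal notes)"):
    """Join entries with the § separator and render the banner shown
    above, with percent-used and comma-formatted character counts."""
    body = "\n§\n".join(entries)
    used = len(body)
    pct = int(used * 100 / char_limit) if char_limit else 0
    bar = "═" * 48
    return f"{bar}\n{title} [{pct}% — {used:,}/{char_limit:,} chars]\n{bar}\n{body}"
```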
2.2 Trigger Conditions — the Counter Mechanism
Core counters (run_agent.py, lines 1109-1112):
self._memory_nudge_interval = 10  # trigger a memory review every 10 user turns
self._memory_flush_min_turns = 6  # require at least 6 turns before a pre-compression flush
self._turns_since_memory = 0      # turns since the last memory operation
self._iters_since_skill = 0       # tool iterations since the last skill operation
self._skill_nudge_interval = 15   # trigger a skill review every 15 tool iterations
Memory trigger logic (run_agent.py, lines 7486-7493):
_should_review_memory = False
if (self._memory_nudge_interval > 0
        and "memory" in self.valid_tool_names
        and self._memory_store):
    self._turns_since_memory += 1  # +1 on every turn
    if self._turns_since_memory >= self._memory_nudge_interval:
        _should_review_memory = True   # threshold reached: trigger a review
        self._turns_since_memory = 0   # reset the counter
Skill trigger logic (run_agent.py, lines 9959-9963):
_should_review_skills = False
if (self._skill_nudge_interval > 0
        and self._iters_since_skill >= self._skill_nudge_interval
        and "skill_manage" in self.valid_tool_names):
    _should_review_skills = True  # threshold reached: trigger a review
    self._iters_since_skill = 0   # reset the counter
Counter resets (run_agent.py, lines 6594-6597 and 6815-6817):
# When the memory tool is actually called
if function_name == "memory":
    self._turns_since_memory = 0
# When the skill_manage tool is actually called
elif function_name == "skill_manage":
    self._iters_since_skill = 0
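Both counters follow the same tick/fire/reset cycle, which can be modeled in a few lines. NudgeCounter is a hypothetical simplification for illustration, not a Hermes class:

```python
class NudgeCounter:
    """Tick once per user turn (memory) or tool iteration (skills);
    fire when the interval is reached; reset on fire or when the
    corresponding tool is explicitly called."""
    def __init__(self, interval: int):
        self.interval = interval  # an interval of 0 disables the nudge, as in Hermes
        self.count = 0

    def tick(self) -> bool:
        if self.interval <= 0:
            return False
        self.count += 1
        if self.count >= self.interval:
            self.count = 0  # reset, so the next review is a full interval away
            return True
        return False

    def tool_called(self) -> None:
        # Mirrors the reset when memory/skill_manage is actually invoked:
        # an explicit save postpones the next automatic review.
        self.count = 0
```

Note the consequence of tool_called(): if the model saves memories on its own, the mechanical review is pushed back a full interval rather than firing redundantly.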
2.3 Memory Distillation — the Background Review Thread
Core method: _spawn_background_review() (run_agent.py, lines 1958-2045)
When a review triggers, Hermes does not perform it in the main session; instead it forks an independent background agent to do the work:
def _spawn_background_review(
    self,
    messages_snapshot: List[Dict],
    review_memory: bool = False,
    review_skills: bool = False,
) -> None:
    """Spawn a background thread to review the conversation for memory/skill saves.
    Creates a full AIAgent fork with the same model, tools, and context as the main session.
    The review prompt is appended as the next user turn in the forked conversation.
    Writes directly to the shared memory/skill stores.
    Never modifies the main conversation history or produces user-visible output.
    """
Review prompts (three modes):
_MEMORY_REVIEW_PROMPT = (
    "Review the conversation above and consider saving to memory if appropriate.\n\n"
    "Focus on:\n"
    "1. Has the user revealed things about themselves — their persona, desires, "
    "preferences, or personal details worth remembering?\n"
    "2. Has the user expressed expectations about how you should behave, their work "
    "style, or ways they want you to operate?\n\n"
    "If something stands out, save it using the memory tool. "
    "If nothing is worth saving, just say 'Nothing to save.' and stop."
)
_SKILL_REVIEW_PROMPT = (
    "Review the conversation above and consider saving or updating a skill if appropriate.\n\n"
    "Focus on: was a non-trivial approach used to complete a task that required trial "
    "and error, or changing course due to experiential findings along the way, or did "
    "the user expect or desire a different method or outcome?\n\n"
    "If a relevant skill already exists, update it with what you learned. "
    "Otherwise, create a new skill if the approach is reusable.\n"
    "If nothing is worth saving, just say 'Nothing to save.' and stop."
)
_COMBINED_REVIEW_PROMPT = ( ... )  # used when both triggers fire in the same turn
Background agent configuration:
review_agent = AIAgent(
    model=self.model,     # same model as the main session
    max_iterations=8,     # at most 8 tool iterations
    quiet_mode=True,      # silent mode
    platform=self.platform,
    provider=self.provider,
)
# Share the memory store (writes go straight to the main store)
review_agent._memory_store = self._memory_store
review_agent._memory_enabled = self._memory_enabled
# Disable nudges (avoid recursive triggering)
review_agent._memory_nudge_interval = 0
review_agent._skill_nudge_interval = 0
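Stripped of agent details, the fork-and-review flow is: snapshot the messages, append the review prompt as a user turn, and run the review on a daemon thread. A sketch, where run_review stands in for the forked agent's run loop and the helper name is illustrative:

```python
import threading

def spawn_background_review(messages, review_prompt, run_review):
    """Copy the conversation, append the review prompt as the next
    user turn, and execute the review off the main thread so the
    user's session is never blocked or mutated."""
    snapshot = [dict(m) for m in messages]  # per-message shallow copy
    snapshot.append({"role": "user", "content": review_prompt})
    t = threading.Thread(target=run_review, args=(snapshot,), daemon=True)
    t.start()
    return t
```

Because the thread receives a copy, the review agent can freely extend its conversation while the main history stays untouched; the daemon flag means a hung review never blocks process exit.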
2.4 Memory Extraction During Context Compression — flush_memories()
Core method (run_agent.py, lines 6201-6280).
Before compressing the context, Hermes injects a flush message that gives the model one chance to save memories:
def flush_memories(self, messages: list = None, min_turns: int = None):
    """Give the model one turn to persist memories before context is lost.
    Called before compression, session reset, or CLI exit.
    """
    flush_content = (
        "[System: The session is being compressed. "
        "Save anything worth remembering — prioritize user preferences, "
        "corrections, and recurring patterns over task-specific details.]"
    )
    flush_msg = {"role": "user", "content": flush_content, "_flush_sentinel": _sentinel}
    messages.append(flush_msg)
    # Make the API call with only the memory tool available
    memory_tool_def = None
    for t in (self.tools or []):
        if t.get("function", {}).get("name") == "memory":
            memory_tool_def = t
            break
    # Use the auxiliary (cheaper) client for the flush call
    response = _call_llm(
        task="flush_memories",
        messages=api_messages,
        tools=[memory_tool_def],
        temperature=0.3,
        max_tokens=5120,
    )
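The append-call-then-strip dance around the flush message can be packaged as a context manager. A sketch: Hermes tags the message with a sentinel field as shown above, but this exact helper is hypothetical:

```python
from contextlib import contextmanager

@contextmanager
def flush_turn(messages, flush_content):
    """Temporarily append a sentinel-tagged flush message for one
    memory-only API call, then strip it so the session history is
    left exactly as it was."""
    sentinel = object()
    messages.append({
        "role": "user",
        "content": flush_content,
        "_flush_sentinel": sentinel,
    })
    try:
        yield messages  # make the memory-only LLM call inside this block
    finally:
        messages[:] = [m for m in messages
                       if m.get("_flush_sentinel") is not sentinel]
```

Using a fresh object() as the sentinel guarantees only this call's flush message is removed, even if the history somehow contains look-alike messages.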
2.5 External Memory Plugins — the MemoryProvider Abstract Base Class
Lifecycle hooks (agent/memory_provider.py):
class MemoryProvider(ABC):
    def initialize(self, session_id: str, **kwargs) -> None: ...
    def system_prompt_block(self) -> str: ...                # static system prompt block
    def prefetch(self, query: str, **kw) -> str: ...         # per-turn prefetch
    def queue_prefetch(self, query: str, **kw) -> None: ...  # queued prefetch
    def sync_turn(self, user, asst, **kw) -> None: ...       # per-turn sync
    def on_turn_start(self, turn, msg, **kw) -> None: ...
    def on_session_end(self, messages) -> None: ...          # session end
    def on_pre_compress(self, messages) -> str: ...          # before compression
    def on_memory_write(self, action, target, content) -> None: ...  # on built-in memory writes
    def on_delegation(self, task, result, **kw) -> None: ...  # on subtask completion
Key constraint: only one external memory provider may be registered alongside the built-in provider (enforced by MemoryManager.add_provider()).
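A provider only needs to override the hooks it cares about. A minimal sketch: LoggingProvider is hypothetical, and the base-class signatures are abridged from the listing above:

```python
from abc import ABC

class MemoryProvider(ABC):
    # Abridged base class: only the hooks exercised below.
    def system_prompt_block(self) -> str:
        return ""
    def sync_turn(self, user, asst, **kw) -> None:
        pass
    def on_session_end(self, messages) -> None:
        pass

class LoggingProvider(MemoryProvider):
    """Toy provider that records each turn in memory and reports a
    count in its system prompt block."""
    def __init__(self):
        self.turns = []
    def sync_turn(self, user, asst, **kw):
        self.turns.append((user, asst))
    def system_prompt_block(self):
        return f"[{len(self.turns)} turn(s) recorded]"
```

A real provider (Honcho, Mem0, etc.) would replace the in-memory list with calls to its backend, but the hook surface is the same.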
3. Skill Generation Mechanism: Source Analysis
3.1 Skill Management Tool — tools/skill_manager_tool.py
Tool schema (the description shown to the LLM):
SKILL_MANAGE_SCHEMA = {
    "name": "skill_manage",
    "description": (
        "Manage skills (create, update, delete). Skills are your procedural "
        "memory — reusable approaches for recurring task types.\n\n"
        "Create when: complex task succeeded (5+ calls), errors overcome, "
        "user-corrected approach worked, non-trivial workflow discovered, "
        "or user asks you to remember a procedure.\n"
        "Update when: instructions stale/wrong, OS-specific failures, "
        "missing steps or pitfalls found during use.\n\n"
        "After difficult/iterative tasks, offer to save as a skill. "
        "Skip for simple one-offs. Confirm with user before creating/deleting."
    ),
    # ... parameters: action, name, content, old_string, new_string, ...
}
Trigger conditions (guidance embedded in the system prompt):
SKILLS_GUIDANCE = (
    "After completing a complex task (5+ tool calls), fixing a tricky error, "
    "or discovering a non-trivial workflow, save the approach as a "
    "skill with skill_manage so you can reuse it next time.\n"
    "When using a skill and finding it outdated, incomplete, or wrong, "
    "patch it immediately with skill_manage(action='patch') — don't wait to be asked."
)
Counter trigger:
- Every 15 tool iterations (creation_nudge_interval: 15) a background skill review is spawned; _iters_since_skill increments after each tool call and resets to 0 whenever skill_manage is actually invoked
3.2 Skill Creation Flow
def _create_skill(name: str, content: str, category: str = None) -> Dict[str, Any]:
    # 1. Validate the name (lowercase, hyphens, underscores, max 64 chars)
    err = _validate_name(name)
    # 2. Validate the category (an optional single-level directory)
    err = _validate_category(category)
    # 3. Validate the frontmatter (must have YAML --- delimiters, name, description)
    err = _validate_frontmatter(content)
    # 4. Validate the content size (max 100,000 chars)
    err = _validate_content_size(content)
    # 5. Check for name collisions (across all skill directories)
    existing = _find_skill(name)
    # 6. Create the directory and atomically write SKILL.md
    skill_dir = _resolve_skill_dir(name, category)
    _atomic_write_text(skill_md, content)
    # 7. Security scan — roll back if blocked
    scan_error = _security_scan_skill(skill_dir)
    if scan_error:
        shutil.rmtree(skill_dir, ignore_errors=True)
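Step 1's rules (lowercase, hyphens, underscores, max 64 chars) reduce to a single regex check. A sketch; the real _validate_name may differ in edge cases such as which characters may lead:

```python
import re

# Assumed character set: lowercase letters, digits, '-' and '_',
# starting with a letter or digit.
_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9_-]*$")

def validate_name(name: str):
    """Return an error message for an invalid skill name, else None
    (mirroring the err-or-None convention in _create_skill above)."""
    if not name or len(name) > 64:
        return "name must be 1-64 characters"
    if not _NAME_RE.match(name):
        return "name may only contain lowercase letters, digits, '-' and '_'"
    return None
```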
3.3 Security — tools/skills_guard.py
Trust levels and install policy:
TRUSTED_REPOS = {"openai/skills", "anthropics/skills"}
INSTALL_POLICY = {
    #                 safe     caution  dangerous
    "builtin":       ("allow", "allow", "allow"),
    "trusted":       ("allow", "allow", "block"),
    "community":     ("allow", "block", "block"),
    "agent-created": ("allow", "allow", "ask"),  # agent-created skills: dangerous requires user confirmation
}
Threat pattern detection (regex scan):
THREAT_PATTERNS = [
    # Exfiltration: shell commands leaking secrets
    (r'curl\s+[^\n]*\$\{?\w*(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL|API)', ...),
    # Injection: prompt injection
    (r'ignore\s+(previous|all|above|prior)\s+instructions', ...),
    # Destructive commands
    (r'rm\s+-rf\s+/', ...),
    # Persistence backdoors
    (r'authorized_keys', ...),
    # Obfuscation
    (r'base64\s+-d\s*\|', ...),
    # ... more patterns
]
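Applying the pattern table is a straightforward scan over the skill text. A sketch using three of the patterns above; the severity labels and the case-insensitive flag on the injection pattern are assumptions, and the real scanner walks every file in the skill directory:

```python
import re

# Three of the patterns listed above; labels are illustrative.
_PATTERNS = [
    (re.compile(r"ignore\s+(previous|all|above|prior)\s+instructions", re.I),
     "prompt-injection"),
    (re.compile(r"rm\s+-rf\s+/"), "destructive-command"),
    (re.compile(r"base64\s+-d\s*\|"), "obfuscation"),
]

def scan_skill_text(text: str):
    """Return the threat labels whose pattern matches the skill body;
    an empty list means the text passed this (partial) scan."""
    return [label for pat, label in _PATTERNS if pat.search(text)]
```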
3.4 Skill Directory Layout
~/.hermes/skills/
├── my-skill/
│   ├── SKILL.md    # required: YAML frontmatter + markdown body
│   ├── references/ # optional: reference material
│   ├── templates/  # optional: templates
│   ├── scripts/    # optional: scripts
│   └── assets/     # optional: static assets
└── category-name/
    └── another-skill/
        └── SKILL.md
SKILL.md frontmatter format:
---
name: skill-name
description: "What this skill does (max 1024 chars)"
version: 1.0.0
metadata:
  hermes:
    tags: [tag1, tag2]
    related_skills: []
---
# Skill Title
## Overview
...
## Prerequisites
...
## Workflow
...
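The frontmatter requirement can be checked without a full YAML parser if only top-level keys matter. A minimal sketch; the real _validate_frontmatter may parse actual YAML:

```python
def validate_frontmatter(content: str):
    """Require a leading '---' block that closes with '---' and
    declares top-level 'name' and 'description' keys.
    Returns an error message, or None if valid."""
    lines = content.splitlines()
    if not lines or lines[0].strip() != "---":
        return "missing opening '---' delimiter"
    try:
        end = lines[1:].index("---") + 1  # index of the closing '---'
    except ValueError:
        return "missing closing '---' delimiter"
    # Top-level keys only: skip indented lines (nested under metadata:)
    keys = {ln.split(":", 1)[0].strip()
            for ln in lines[1:end] if ":" in ln and not ln.startswith(" ")}
    missing = {"name", "description"} - keys
    if missing:
        return f"frontmatter missing keys: {', '.join(sorted(missing))}"
    return None
```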
4. Triggers and Algorithms in Detail
4.1 Memory Trigger Timeline
User turn 1  → _turns_since_memory = 1
User turn 2  → _turns_since_memory = 2
...
User turn 10 → _turns_since_memory = 10 >= nudge_interval (10)
  → _should_review_memory = True
  → _turns_since_memory = 0
  → spawn a background review agent
  → the background agent adds entries via the memory tool
  → entries are written to MEMORY.md/USER.md (live state updated)
  → the frozen snapshot in the system prompt does NOT change (it takes effect next session)
4.2 Skill Trigger Timeline
Tool iteration 1  → _iters_since_skill = 1
Tool iteration 2  → _iters_since_skill = 2
...
Tool iteration 15 → _iters_since_skill = 15 >= skill_nudge_interval (15)
  → _should_review_skills = True
  → _iters_since_skill = 0
  → spawn a background review agent
  → the background agent creates/updates skills via skill_manage
4.3 Proactive Triggering (Model-Driven)
Beyond the counter triggers, the model can call the memory and skill_manage tools on its own initiative, following the system prompt's guidance:
- Memory: "When the user corrects you or says 'remember this'…" → the model calls memory(action="add", ...)
- Skills: "After completing a complex task (5+ tool calls)…" → the model calls skill_manage(action="create", ...)
- After either call, _turns_since_memory / _iters_since_skill is reset to 0
4.4 Pre-Compression Flush
Context compression is triggered
→ flush_memories() injects the flush message
→ a separate API call is made (memory tool only)
→ temperature=0.3, max_tokens=5120
→ the model decides whether anything needs saving
→ the flush message is removed afterwards (it never pollutes the session history)
5. Code-Level Comparison with OpenClaw
| Dimension | Hermes | OpenClaw |
|---|---|---|
| Memory storage | MemoryStore class, MEMORY.md + USER.md, character limits | Direct filesystem reads/writes, no wrapper class, no character limits |
| Memory injection | Frozen-snapshot pattern (captured at load_from_disk, unchanged during the session) | Files re-read on every system prompt build |
| Memory triggers | Counter (every N turns) + model-initiated + pre-compression flush | Model-initiated only (guided by SOUL.md/AGENTS.md) |
| Memory format | §-separated entries with a character budget | Free-form markdown |
| Security scanning | Content-injection detection (regexes for prompt injection, exfiltration, etc.) | No built-in scanning |
| Skill management | skill_manage tool (create/edit/patch/delete/write_file/remove_file) | No dedicated tool; relies on file operations |
| Skill triggers | Counter (every N iterations) + model-initiated | Model-initiated only |
| Skill security | skills_guard.py regex scan + trust-level policy | No built-in scanning |
| Background review | Forks an independent agent thread to run reviews | No background review mechanism |
| Pre-compression flush | flush_memories() gives the model one save opportunity before compression | No such mechanism |
| External memory plugins | 6+ plugins including Honcho/Holographic/Mem0 | No plugin architecture |
| Prefix caching | Frozen snapshot keeps the system prompt stable, maximizing cache hits | Files read each time; the system prompt may change |
6. Summary of Key Findings
6.1 Hermes' Core Innovations
- Frozen-snapshot pattern: the system prompt is frozen at session start; mid-session memory writes do not touch the current prompt, maximizing prefix-cache hits
- Background review agent: reviews run in a forked agent thread rather than the main session, so they never interfere with the user's task
- Dual triggering (counter + model-initiated): a mechanical counter guarantees periodic reviews, while system prompt guidance steers the model to learn proactively
- Pre-compression flush: the model gets a last chance to save memories before the context is compressed, preventing memory loss
- Security scanning: every skill (including agent-created ones) passes a regex security scan guarding against injection and exfiltration
6.2 Key Points OpenClaw Could Reproduce Directly
| Priority | Feature | Effort | Impact |
|---|---|---|---|
| P0 | Frozen-snapshot pattern | Low | Maximizes prefix caching, reduces token spend |
| P0 | Pre-compression flush | Low | Prevents memory loss during context compression |
| P1 | Counter triggers | Medium | Periodic reviews ensure memories are not forgotten |
| P1 | Background review agent | Medium | Proactive learning without disturbing the main session |
| P2 | Security scanning | Medium | Prevents skill injection and data exfiltration |
| P2 | External memory plugins | High | Supports multiple memory backends |
Archived: 2026-04-14
Source: /root/.openclaw/workspace-ny/hermes-source-analysis-report.md